Singular Value Decomposition - Jacobi and Lanczos Methods

Since computing the SVD of $A$ can be seen as computing the EVD of the symmetric matrices $A^*A$, $AA^*$, or $\begin{bmatrix}0 & A \\ A^* & 0 \end{bmatrix}$, simple modifications of the corresponding EVD algorithms yield versions for computing the SVD.

For more details on the one-sided Jacobi method, see Z. Drmač, Computing Eigenvalues and Singular Values to High Relative Accuracy, and the references therein.

Prerequisites

The reader should be familiar with the concepts of singular values and vectors, the related perturbation theory and algorithms, and with the Jacobi and Lanczos methods for the symmetric eigenvalue decomposition.

Competences

The reader should be able to recognise matrices which warrant high relative accuracy and to apply the Jacobi method to them. The reader should also be able to recognise matrices to which the Lanczos method can be efficiently applied, and to do so.

One-sided Jacobi method

Let $A\in\mathbb{R}^{m\times n}$ with $\mathop{\mathrm{rank}}(A)=n$ (therefore, $m\geq n$) and $A=U\Sigma V^T$ its thin SVD.

Definition

Let $A=BD$, where $D=\mathop{\mathrm{diag}} (\| A_{:,1}\|_2, \ldots, \|A_{:,n}\|_2)$ is a diagonal scaling matrix and $B$ is the matrix $A$ with its columns scaled to unit norm. Then $[B^T B]_{i,i}=1$ for all $i$.
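The factorization from the Definition is easy to form explicitly. A minimal sketch (the matrix is an arbitrary example):

```julia
using LinearAlgebra

# Column scaling from the Definition above: D holds the column norms of A,
# and B = A*inv(D) has columns of unit norm, so B'*B has unit diagonal.
A = [3.0 1.0; 4.0 2.0; 0.0 2.0]
D = Diagonal([norm(A[:,j]) for j = 1:size(A,2)])
B = A / D                  # B = A*inv(D), hence A = B*D
diag(B'*B)                 # all entries equal 1 up to rounding
```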

Facts

  1. Let $\tilde U$, $\tilde V$ and $\tilde \Sigma$ be the approximations of $U$, $V$ and $\Sigma$, respectively, computed by a backward stable algorithm as $A+\Delta A=\tilde U\tilde \Sigma \tilde V^T$. Since the orthogonality of $\tilde U$ and $\tilde V$ cannot be guaranteed, this product in general does not represent an SVD. However, there exist nearby orthogonal matrices $\hat U$ and $\hat V$ such that $(I+E_1)(A+\Delta A)(I+E_2)=\hat U \tilde \Sigma \hat V^T$, where the departures from orthogonality, $E_1$ and $E_2$, are small in norm.

  2. Standard algorithms compute the singular values with backward error $\| \Delta A\|_2\leq \phi\varepsilon \|A\|_2$, where $\varepsilon$ is the machine precision and $\phi$ is a slowly growing function of $n$. The best absolute error bound for the singular values is $|\sigma_j-\tilde \sigma_j|\leq \| \Delta A\|_2$, and the best relative error bound is $$ \max_j \frac{|\sigma_j-\tilde\sigma_j|}{\sigma_j}\leq \frac{\| \Delta A \|_2}{\sigma_n} \leq \phi \varepsilon \kappa_2(A). $$

  3. Let $\|[\Delta A]_{:,j}\|_2\leq \varepsilon \| A_{:,j}\|_2$ for all $j$. Then $A+\Delta A=(B+\Delta B)D$ and $\|\Delta B\|_F\leq \sqrt{n}\varepsilon$, and $$ \max_j \frac{|\sigma_j-\tilde\sigma_j|}{\sigma_j}\leq \| (\Delta B) B^{\dagger} \|_2\leq \sqrt{n}\varepsilon \| B^{\dagger}\|_2. $$ This is Fact 3 from the Relative perturbation theory.

  4. It holds that $$ \| B^{\dagger} \|_2 \leq \kappa_2(B) \leq \sqrt{n} \min_{S=\mathop{\mathrm{diag}}} \kappa_2(A\cdot S)\leq \sqrt{n}\,\kappa_2(A). $$ Therefore, a numerical algorithm with column-wise small backward error computes the singular values more accurately than an algorithm with a small norm-wise backward error.

  5. In each step, the one-sided Jacobi method computes the Jacobi rotation matrix from the pivot submatrix of the current Gram matrix $A^TA$. Afterwards, $A$ is multiplied by the computed rotation matrix from the right (only two columns are affected). Convergence of the Jacobi method for the symmetric matrix $A^TA$ to a diagonal matrix implies that $A$ converges to a matrix $AV$ with orthogonal columns, where $V$ is the accumulated product of rotations and $V^TV=I$. Then $AV=U\Sigma$, where $\Sigma=\mathop{\mathrm{diag}}(\| [AV]_{:,1}\|_2, \ldots, \| [AV]_{:,n}\|_2)$ holds the column norms of $AV$, $U=AV\Sigma^{-1}$, and $A=U\Sigma V^T$ is the SVD of $A$.

  6. One-sided Jacobi method computes the SVD with the error bounds from Facts 2 and 3, provided that the condition number of the intermediate scaled matrices does not grow much. There is overwhelming numerical evidence for this. Alternatively, if $A$ is square, the one-sided Jacobi method can be applied to the transposed matrix $A^T=DB^T$, and the same error bounds apply, but the condition number of the scaled matrix (this time scaled from the left) does not change. This approach is slower.

  7. One-sided Jacobi method can be preconditioned by applying one QR factorization with full pivoting and one QR factorization without pivoting to $A$, to obtain faster convergence without sacrificing accuracy. This method is implemented in the LAPACK routine DGESVJ. Writing the wrapper for DGESVJ is a tutorial assignment.
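The core of a single step from Fact 5 can be sketched directly, using the same rotation formulas as the implementation below (the matrix is an arbitrary example):

```julia
using LinearAlgebra

# One step of the one-sided Jacobi method from Fact 5: compute the rotation
# from the 2x2 pivot submatrix of the Gram matrix A'*A and apply it to
# columns i and j of A from the right.
A = [2.0 1.0; 1.0 3.0; 0.0 1.0]
i, j = 1, 2
F = A[:,[i,j]]' * A[:,[i,j]]            # 2x2 pivot submatrix of A'*A
τ = (F[1,1] - F[2,2]) / (2*F[1,2])
t = sign(τ) / (abs(τ) + sqrt(1 + τ^2))
c = 1 / sqrt(1 + t^2); s = c*t
G = LinearAlgebra.Givens(i, j, c, s)
rmul!(A, adjoint(G))                    # rotate columns i and j
dot(A[:,i], A[:,j])                     # ≈ 0: the two columns are now orthogonal
```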

Example - Standard matrix


In [1]:
using LinearAlgebra

In [8]:
function myJacobiR(A1::AbstractMatrix)
    A=deepcopy(A1)
    m,n=size(A)
    T=typeof(A[1,1])
    V=Matrix{T}(I,n,n)
    # Tolerance for rotation
    tol=sqrt(map(T,n))*eps(T)
    # Counters
    p=n*(n-1)/2
    sweep=0
    pcurrent=0
    # First criterion is for standard accuracy, second one is for relative accuracy
    # while sweep<30 && norm(A-Diagonal(diag(A)))>tol
    while sweep<30 && pcurrent<p
        sweep+=1
        # Row-cyclic strategy
        for i = 1 : n-1 
            for j = i+1 : n
                # Compute the 2 x 2 submatrix of A'*A
                F=A[:,[i,j]]'*A[:,[i,j]]
                # Check the tolerance - the first criterion is standard,
                # the second one is for relative accuracy               
                # if A[i,j]!=zero(T)
                # 
                if abs(F[1,2])>tol*sqrt(F[1,1]*F[2,2])
                    # Compute c and s
                    τ=(F[1,1]-F[2,2])/(2*F[1,2])
                    t=sign(τ)/(abs(τ)+sqrt(1+τ^2))
                    c=1/sqrt(1+t^2)
                    s=c*t
                    G=LinearAlgebra.Givens(i,j,c,s)
                    # A*=G'
                    rmul!(A,adjoint(G))
                    # V*=G'
                    rmul!(V,adjoint(G))
                    pcurrent=0
                else
                    pcurrent+=1
                end
            end
        end
    end
    σ=[norm(A[:,k]) for k=1:n]
    for k=1:n
        A[:,k]./=σ[k]
    end
    # A, σ, V
    SVD(A,σ,adjoint(V))
end


Out[8]:
myJacobiR (generic function with 1 method)

In [9]:
m=8
n=5
import Random
Random.seed!(432)
A=rand(-9.0:9,m,n)


Out[9]:
8×5 Array{Float64,2}:
  5.0   6.0   7.0  -2.0  -3.0
  7.0   7.0  -3.0   3.0   3.0
 -7.0   5.0  -7.0   0.0  -7.0
 -4.0  -2.0   8.0   4.0  -5.0
 -9.0   5.0   4.0  -6.0   9.0
  0.0   4.0  -7.0   9.0  -1.0
 -7.0   5.0  -2.0  -4.0   3.0
 -6.0  -3.0  -5.0   2.0   7.0

In [10]:
U,σ,V=myJacobiR(A)


Out[10]:
SVD{Float64,Float64,Array{Float64,2}}
U factor:
8×5 Array{Float64,2}:
  0.24824    0.331266    0.348246    0.105505   -0.462337
  0.181046   0.666841   -0.245326    0.222032   -0.0526094
 -0.172452  -0.280545   -0.447492   -0.294623   -0.649063
  0.110458  -0.518481    0.270339    0.659338   -0.241586
 -0.691779   0.264766    0.309096    0.283422   -0.132881
  0.106257   0.0618798  -0.61431     0.530358   -0.1044
 -0.465547   0.120751   -0.0553207  -0.0817947  -0.285169
 -0.396328  -0.0977143  -0.263117    0.223032    0.4404
singular values:
5-element Array{Float64,1}:
 19.69791263939664
 14.158291869993421
 17.724237830879755
  8.713089284820876
 13.359209460290945
Vt factor:
5×5 Array{Float64,2}:
  0.76844    -0.126868   0.13642     0.338361  -0.510202
  0.54527     0.618575  -0.0701152  -0.172764   0.534118
  0.0710395  -0.158255   0.870651   -0.453584   0.0783349
 -0.207696    0.213006   0.445206    0.796315   0.281362
  0.252975   -0.728597  -0.142236    0.125731   0.607545

In [11]:
# Residual 
A*V-U*Diagonal(σ)


Out[11]:
8×5 Array{Float64,2}:
  8.88178e-16   0.0          -8.88178e-16  -1.77636e-15   8.88178e-16
  2.22045e-15   8.88178e-15   1.77636e-15   0.0           7.77156e-16
 -3.10862e-15   1.33227e-15  -8.88178e-16   0.0           0.0
 -2.66454e-15  -1.77636e-15   3.55271e-15   0.0          -4.44089e-16
  1.77636e-15   2.66454e-15  -1.77636e-15   1.33227e-15  -2.22045e-16
  0.0           1.22125e-15  -3.55271e-15   0.0           8.88178e-16
  1.77636e-15   1.11022e-15  -1.88738e-15   3.33067e-16   8.88178e-16
  1.77636e-15  -2.44249e-15  -8.88178e-16   2.66454e-15  -1.77636e-15

In [13]:
# Orthogonality
norm(U'*U-I),norm(V'*V-I)


Out[13]:
(2.8484213703573437e-16, 2.0024121831055014e-15)

Example - Strongly scaled matrix


In [20]:
m=20
n=15
B=rand(m,n)
D=exp.(50*(rand(n).-0.5))
A=B*Diagonal(D)


Out[20]:
20×15 Array{Float64,2}:
 1.28497    9.29526e-12  7.19693e-12  0.0133595   …       1.19939e6  292687.0
 0.102904   1.55081e-11  7.00289e-11  0.152731            2.45487e6  513804.0
 0.579145   1.28878e-11  4.63772e-11  0.0436482      276758.0        162486.0
 0.461157   1.2992e-11   2.11951e-11  0.0599675      613530.0        394949.0
 0.50676    1.34353e-11  3.27004e-11  0.048683       114398.0        441071.0
 0.832111   9.25837e-12  5.85074e-11  0.0811439   …       2.78822e6  228584.0
 0.794743   3.61928e-13  7.05965e-11  0.143774            2.11064e6  436372.0
 0.387919   1.50881e-11  4.87205e-11  0.156039       292964.0        147750.0
 0.694528   1.61237e-11  8.41875e-12  0.0247053           1.86272e6  234531.0
 0.0621561  1.01771e-11  1.86538e-11  0.0886907      879742.0        543484.0
 1.02631    5.01821e-13  2.45007e-11  0.0301335   …       2.41286e6  184740.0
 0.895486   8.61778e-13  2.29969e-11  0.139526            2.05319e6  371141.0
 0.583855   1.44912e-11  7.88175e-12  0.13114        263257.0         24294.4
 0.0094102  9.90535e-12  4.8527e-11   0.00286466          1.15463e6  429786.0
 0.717577   6.3469e-12   1.00734e-11  0.0355233           1.39943e6  571848.0
 0.935647   1.52479e-12  2.96509e-11  0.0938443   …  289375.0         83739.3
 1.31566    5.56105e-12  1.704e-11    0.123792       224704.0        495385.0
 0.542552   2.99572e-12  2.26577e-12  0.0148793           1.56338e6  567779.0
 0.587776   7.84805e-12  4.73215e-11  0.148596            2.65568e6  396245.0
 1.09695    5.13422e-12  5.15498e-11  0.0308256           1.04961e6  349080.0

In [21]:
cond(B), cond(A)


Out[21]:
(40.06796911232546, 1.4800286280224468e22)

In [22]:
U,σ,V=myJacobiR(A);

In [23]:
[sort(σ,rev=true) svdvals(A)]


Out[23]:
15×2 Array{Float64,2}:
      1.86841e11        1.86841e11
      3.91707e6         3.91707e6
 907721.0          907721.0
 269257.0          269257.0
      2.06504           2.06504
      0.255726          0.255726
      0.0915074         0.0915074
      9.73427e-6        9.86782e-6
      3.43426e-7        7.04534e-6
      7.22344e-8        3.31572e-7
      1.04923e-8        7.16697e-8
      1.68296e-9        1.04878e-8
      5.08246e-11       1.62055e-9
      1.56946e-11       2.17374e-10
      1.14613e-11       1.26242e-11

In [24]:
(sort(σ,rev=true)-svdvals(A))./sort(σ,rev=true)


Out[24]:
15-element Array{Float64,1}:
   1.6333425724887844e-16
   0.0
   7.695005873791293e-16
   1.0808959080366572e-15
  -4.9419448863811014e-11
  -2.8676659826630016e-11
  -2.2970962960524042e-11
  -0.013720353178009403
 -19.514857373929708
  -3.5902246206775463
  -5.830718736903724
  -5.2317274297877265
 -30.885248855584667
 -12.850292675543173
  -0.10146089498252073

In [25]:
norm(A*V-U*Diagonal(σ))


Out[25]:
2.6221708501235774e-5

In [26]:
U'*A*V


Out[26]:
15×15 Array{Float64,2}:
  2.06504      -6.89005e-27  -6.78057e-26  …  -3.83516e-10      -1.41212e-10
  1.54292e-16   1.56946e-11   8.76866e-27     -4.14006e-10      -3.42282e-11
  1.24615e-16   2.35895e-27   5.08246e-11     -2.05061e-10      -2.02556e-10
 -7.83406e-17  -9.69434e-27  -5.31572e-27     -3.41163e-10       7.15562e-11
  2.86386e-16  -1.55264e-27   1.20985e-26     -7.1223e-10        1.6072e-10
 -2.16991e-16   1.0433e-27   -9.17898e-27  …   7.1084e-10        5.8642e-11
  9.47213e-16  -3.12472e-26  -9.51417e-26      1.37381e-10       2.32831e-10
 -3.46445e-16  -5.18331e-27  -6.34718e-27     -3.66768e-10      -2.65231e-10
  3.38312e-16  -3.26943e-27   1.71891e-26      1.84659e-10       7.88145e-11
  4.87239e-17   2.23381e-27  -3.08529e-27     -5.78896e-11       9.99867e-11
  8.37539e-17   4.3771e-27   -8.17535e-27  …   3.11554e-10      -3.15761e-11
  2.20451e-17  -1.42061e-26   1.83908e-26      6.03567e-11      -2.68597e-11
 -2.14356e-17  -1.71379e-28   8.66361e-27     -3.48534e-10       4.09376e-11
  5.30143e-17   7.87925e-28  -3.40928e-26      3.91707e6         3.97118e-12
 -3.04483e-16  -1.02588e-26  -2.86267e-26      4.39919e-10  907721.0

In the alternative approach, we first apply the QR factorization with column pivoting to reduce $A$ to a square triangular matrix.


In [14]:
# ?qr

In [27]:
Q=qr(A,Val(true))


Out[27]:
QRPivoted{Float64,Array{Float64,2}}
Q factor:
20×20 LinearAlgebra.QRPackedQ{Float64,Array{Float64,2}}:
 -0.112444   -0.140986   -0.0984488  -0.265965   …  -0.12401      0.139674
 -0.30761    -0.173492   -0.026969    0.316353      -0.062173     0.217562
 -0.131587    0.124404   -0.0213959   0.0725957      0.375906    -0.321711
 -0.176741    0.104992   -0.203598   -0.0911318      0.428445     0.428211
 -0.113207    0.138797   -0.367878   -0.046058       0.110223    -0.0718925
 -0.251978   -0.341749    0.267856    0.122613   …   0.31824     -0.279388
 -0.0747984  -0.430942   -0.21057     0.345211      -0.258051    -0.199455
 -0.0404433  -0.0151586  -0.0962698  -0.414622      -0.193068    -0.15425
 -0.195917   -0.187339    0.115377   -0.349441      -0.28558     -0.20663
 -0.137049   -0.0223417  -0.385025    0.0739882     -0.108372     0.0374665
 -0.359643   -0.0854045   0.390821   -0.152985   …  -0.00687946   0.184922
 -0.0485934  -0.455117   -0.169694    0.0221343      0.0166018    0.113236
 -0.252585    0.307623    0.255554    0.235776      -0.217033     0.123466
 -0.334562    0.200479   -0.0298762  -0.306945      -0.301973     0.0204231
 -0.347245    0.15645    -0.152693    0.258208      -0.0604562   -0.131326
 -0.241295    0.284142    0.179928    0.271983   …  -0.162224    -0.120717
 -0.2036      0.244752   -0.326004   -0.0829824      0.164355    -0.101763
 -0.234853   -0.0526204  -0.247037   -0.0883727      0.00855652  -0.342268
 -0.330456   -0.191124    0.147301   -0.205109       0.344996     0.104228
 -0.106898   -0.110759   -0.181646    0.0931446     -0.174928     0.47173
R factor:
15×15 Array{Float64,2}:
 -1.86841e11  -5.78441e6       -1.40437e6  …  -3.3599e-11   -3.42445e-11
  0.0         -3.89373e6  -332814.0            6.04003e-13   5.374e-12
  0.0          0.0        -892362.0           -9.87085e-12  -8.78974e-12
  0.0          0.0              0.0           -7.796e-12    -1.11488e-11
  0.0          0.0              0.0            1.14428e-12   1.07718e-11
  0.0          0.0              0.0        …   1.24494e-11   5.15322e-12
  0.0          0.0              0.0           -7.6153e-12   -1.7346e-12
  0.0          0.0              0.0            2.84567e-12   7.19501e-12
  0.0          0.0              0.0           -4.38422e-13   6.5112e-13
  0.0          0.0              0.0            3.16754e-12  -6.27715e-13
  0.0          0.0              0.0        …  -2.57106e-12  -1.42583e-12
  0.0          0.0              0.0            1.58531e-11   1.29043e-11
  0.0          0.0              0.0           -1.10633e-12  -2.01778e-12
  0.0          0.0              0.0            1.48213e-11  -3.26778e-12
  0.0          0.0              0.0            0.0          -1.21506e-11
permutation:
15-element Array{Int64,1}:
  7
 14
 15
  6
  1
  4
 10
  8
  5
 11
 13
 12
  3
  2
  9

In [28]:
diag(Q.R)


Out[28]:
15-element Array{Float64,1}:
      -1.868412580632638e11
      -3.8937314255474643e6
 -892361.9189265164
 -275532.3433544032
       2.0629719572574428
       0.25533614311349573
      -0.09173874804024434
       9.734031010789638e-6
       3.432518597774612e-7
      -7.216342046293292e-8
      -1.0506881243271293e-8
       1.6830299503911e-9
       5.077011033350644e-11
       1.482129911908064e-11
      -1.2150640178200612e-11

In [29]:
Q.Q


Out[29]:
20×20 LinearAlgebra.QRPackedQ{Float64,Array{Float64,2}}:
 -0.112444   -0.140986   -0.0984488  -0.265965   …  -0.12401      0.139674
 -0.30761    -0.173492   -0.026969    0.316353      -0.062173     0.217562
 -0.131587    0.124404   -0.0213959   0.0725957      0.375906    -0.321711
 -0.176741    0.104992   -0.203598   -0.0911318      0.428445     0.428211
 -0.113207    0.138797   -0.367878   -0.046058       0.110223    -0.0718925
 -0.251978   -0.341749    0.267856    0.122613   …   0.31824     -0.279388
 -0.0747984  -0.430942   -0.21057     0.345211      -0.258051    -0.199455
 -0.0404433  -0.0151586  -0.0962698  -0.414622      -0.193068    -0.15425
 -0.195917   -0.187339    0.115377   -0.349441      -0.28558     -0.20663
 -0.137049   -0.0223417  -0.385025    0.0739882     -0.108372     0.0374665
 -0.359643   -0.0854045   0.390821   -0.152985   …  -0.00687946   0.184922
 -0.0485934  -0.455117   -0.169694    0.0221343      0.0166018    0.113236
 -0.252585    0.307623    0.255554    0.235776      -0.217033     0.123466
 -0.334562    0.200479   -0.0298762  -0.306945      -0.301973     0.0204231
 -0.347245    0.15645    -0.152693    0.258208      -0.0604562   -0.131326
 -0.241295    0.284142    0.179928    0.271983   …  -0.162224    -0.120717
 -0.2036      0.244752   -0.326004   -0.0829824      0.164355    -0.101763
 -0.234853   -0.0526204  -0.247037   -0.0883727      0.00855652  -0.342268
 -0.330456   -0.191124    0.147301   -0.205109       0.344996     0.104228
 -0.106898   -0.110759   -0.181646    0.0931446     -0.174928     0.47173

In [30]:
Matrix(Q.Q)


Out[30]:
20×15 Array{Float64,2}:
 -0.112444   -0.140986   -0.0984488  -0.265965   …   0.586336   -0.2938
 -0.30761    -0.173492   -0.026969    0.316353       0.320827   -0.243392
 -0.131587    0.124404   -0.0213959   0.0725957      0.104046   -0.0426925
 -0.176741    0.104992   -0.203598   -0.0911318     -0.156369   -0.250208
 -0.113207    0.138797   -0.367878   -0.046058       0.307286   -0.126771
 -0.251978   -0.341749    0.267856    0.122613   …   0.155719    0.248359
 -0.0747984  -0.430942   -0.21057     0.345211      -0.269822   -0.182792
 -0.0404433  -0.0151586  -0.0962698  -0.414622      -0.10745    -0.0755865
 -0.195917   -0.187339    0.115377   -0.349441       0.0661594   0.0289456
 -0.137049   -0.0223417  -0.385025    0.0739882      0.0866767   0.246634
 -0.359643   -0.0854045   0.390821   -0.152985   …  -0.338826   -0.214815
 -0.0485934  -0.455117   -0.169694    0.0221343     -0.113928   -0.0961295
 -0.252585    0.307623    0.255554    0.235776       0.246923    0.274453
 -0.334562    0.200479   -0.0298762  -0.306945      -0.111068    0.00366262
 -0.347245    0.15645    -0.152693    0.258208      -0.197147   -0.227958
 -0.241295    0.284142    0.179928    0.271983   …  -0.0199757  -0.148576
 -0.2036      0.244752   -0.326004   -0.0829824     -0.23243     0.105283
 -0.234853   -0.0526204  -0.247037   -0.0883727      0.024367    0.273641
 -0.330456   -0.191124    0.147301   -0.205109       0.0050408   0.216407
 -0.106898   -0.110759   -0.181646    0.0931446     -0.0899578   0.523364

In [31]:
# Residual
norm(Q.Q*Q.R-A[:,Q.p])


Out[31]:
9.536743850772192e-6

In [33]:
Q.p


Out[33]:
15-element Array{Int64,1}:
  7
 14
 15
  6
  1
  4
 10
  8
  5
 11
 13
 12
  3
  2
  9

In [38]:
UR,σR,VR=myJacobiR(Q.R')


Out[38]:
SVD{Float64,Float64,Adjoint{Float64,Array{Float64,2}}}
U factor:
15×15 Adjoint{Float64,Array{Float64,2}}:
 -1.0           3.17369e-5    5.083e-6     …   8.3654e-23   -4.44853e-23
 -3.0959e-5    -0.993755      0.102281         9.60321e-19  -9.06663e-19
 -7.51636e-6   -0.0903163    -0.972031        -4.81964e-19  -7.69544e-18
 -4.46225e-6   -0.0655208    -0.211412        -1.39229e-17   6.29914e-18
 -1.37314e-11  -1.6901e-7    -6.12031e-7       3.17547e-12   1.27461e-12
 -1.56615e-12  -2.69424e-8   -8.78705e-8   …  -2.26668e-11   4.31515e-12
 -6.36674e-13  -4.05281e-9   -1.23517e-8      -8.75463e-11   1.22555e-10
 -8.06419e-17  -8.03095e-13  -9.12843e-12      4.44077e-7    3.80233e-7
 -3.29941e-18  -2.09987e-15  -1.22714e-13      2.0299e-5    -4.49937e-5
 -6.93236e-19  -2.80612e-15  -1.08961e-14      1.11291e-5    5.16553e-5
 -1.33503e-19  -4.64178e-16  -9.55879e-15  …  -0.000211891   0.000509311
 -2.42736e-20  -1.65746e-16   2.42817e-16     -0.00455446    0.0114328
 -6.7468e-22   -1.41592e-17  -2.86257e-17     -8.14311e-5   -0.0477043
 -1.79827e-22   8.55174e-20  -1.14158e-17      0.876037     -0.481604
 -1.83281e-22   1.30548e-18  -1.05899e-17     -0.482222     -0.875015
singular values:
15-element Array{Float64,1}:
      1.8684125815994153e11
      3.9170660051924316e6
 907721.1147305663
 269256.55135098036
      2.0650378552866746
      0.25572560364044084
      0.09150739617275792
      9.734265893348526e-6
      3.4342634019106015e-7
      7.223443219220983e-8
      1.0492266415148127e-8
      1.6829636601742745e-9
      5.082459244795212e-11
      1.569456689234235e-11
      1.1461292180366464e-11
Vt factor:
15×15 Adjoint{Float64,Array{Float64,2}}:
  1.0           6.64566e-10   4.02125e-11  …   0.0           0.0
 -6.65353e-10   0.999711      0.0235968        7.33368e-40  -9.50658e-40
 -2.46945e-11  -0.0238441     0.997654         2.16703e-44   1.64084e-43
 -1.93321e-12  -0.00308375   -0.064264         0.0           0.0
  5.53719e-23   5.15674e-14   1.30123e-12     -4.06277e-24  -3.07663e-23
  5.63701e-25   1.60569e-15   3.2766e-14   …   2.60593e-21  -9.55875e-22
 -1.00925e-25   9.7011e-17    2.38219e-15     -1.22942e-20   2.31053e-21
 -5.01208e-33  -2.08473e-24   5.84843e-23      1.97011e-13  -9.22602e-13
 -3.48995e-36  -2.82573e-27   2.75885e-26     -7.59369e-11  -6.77133e-11
  2.79683e-37   1.74423e-28   1.52278e-27      9.4339e-9     1.42176e-9
  9.99986e-39   6.91417e-30  -8.32343e-29  …  -2.9617e-7     1.54605e-7
  9.58526e-41   3.8681e-32   -1.2357e-30       6.8098e-5    -5.53737e-5
  1.06974e-43   5.77408e-35  -3.04898e-33     -0.0040063     0.0100075
 -7.02689e-45  -3.87079e-36   9.92027e-36      0.927697      0.373333
  2.72883e-45   2.66878e-36   9.78431e-35     -0.373312      0.927643

In [39]:
(sort(σ)-sort(σR))./sort(σ)


Out[39]:
15-element Array{Float64,1}:
  7.048014780829774e-16
  6.176355722237499e-16
  5.086001263808197e-16
  0.0
  3.153486881952566e-16
  1.2825494129921362e-15
  1.387360190733984e-15
  1.7403119177853583e-15
  1.364917979247596e-15
  8.682924265856211e-16
  1.5053595559965583e-15
  1.0808959080366572e-15
  1.2825009789652155e-16
 -1.188801226965444e-16
  1.6333425724887844e-16

Now $QRP^T=A$ and $R^T=U_R\Sigma_R V^T_R$, so

$$ A=(Q V_R) \Sigma_R (U_R^T P^T) $$

is an SVD of $A$.


In [40]:
# Check the residual
U₁=Q.Q*VR
V₁=UR[invperm(Q.p),:]
norm(A*V₁-U₁*Diagonal(σR))


Out[40]:
2.6781383890930834e-5

Lanczos method

The function svds() is based on the Lanczos method for symmetric matrices. The input can be a matrix, but also an operator which defines the product of the given matrix with a vector.
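The underlying idea can be sketched with a few steps of Golub-Kahan (Lanczos) bidiagonalization. This plain-Julia sketch is not the Arpack implementation (which adds implicit restarts and reorthogonalization), but it shows why only products with $A$ and $A^T$ are needed:

```julia
using LinearAlgebra

# Golub-Kahan (Lanczos) bidiagonalization sketch: k steps need only the
# products A*v and A'*u, which is why an operator can replace the matrix.
# No reorthogonalization is done here, so this is illustrative only.
function gklanczos(A, k)
    m, n = size(A)
    α = zeros(k); β = zeros(k-1)
    v = normalize!(rand(n))
    u = A*v; α[1] = norm(u); u /= α[1]
    for l = 2:k
        r = A'*u - α[l-1]*v
        β[l-1] = norm(r); v = r/β[l-1]
        p = A*v - β[l-1]*u
        α[l] = norm(p); u = p/α[l]
    end
    Bidiagonal(α, β, :U)       # the projected bidiagonal matrix
end

A = rand(100, 50)
B = gklanczos(A, 30)
# The largest singular value of B approximates the largest singular value of A.
svdvals(B)[1], svdvals(A)[1]
```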


In [41]:
using Arpack

In [44]:
?svds;

In [45]:
m=20
n=15
A=rand(m,n);

In [46]:
U,σ,V=svd(A);

In [48]:
# Some largest singular values
k=6
σ₁,rest=svds(A,nsv=k);
(σ[1:k]-σ₁.S)./σ[1:k]


Out[48]:
6-element Array{Float64,1}:
  3.914050770947551e-16
  0.0
  1.1125742337808936e-16
  7.5279115900956e-16
 -1.388435003833983e-16
 -1.479660594831049e-16

Example - Large matrix


In [49]:
m=2000
n=1500
Ab=rand(m,n);

In [50]:
@time Ub,σb,Vb=svd(Ab);


  3.974294 seconds (17 allocations: 114.625 MiB, 0.75% gc time)

In [52]:
# Computing only a few singular values is faster than the full SVD,
# although svds is still relatively slow on dense matrices
k=10
@time σl,rest=svds(Ab,nsv=k);


  1.019809 seconds (1.20 k allocations: 1.043 MiB)

In [53]:
(σb[1:k]-σl.S)./σb[1:k]


Out[53]:
10-element Array{Float64,1}:
 -3.936332466932271e-16
 -7.407291823397613e-16
 -7.449527183881937e-16
  1.0439670097470796e-15
  1.7969458776081513e-15
 -6.009753134942673e-16
  3.006847211740453e-16
 -6.023617501532883e-16
  1.6631058984130352e-15
  1.5162997913310521e-15

Example - Very large sparse matrix


In [54]:
using SparseArrays

In [57]:
?sprand;

In [59]:
m=10000
n=3000
A=sprand(m,n,0.05)


Out[59]:
10000×3000 SparseMatrixCSC{Float64,Int64} with 1501072 stored entries:
  [22  ,    1]  =  0.802364
  [74  ,    1]  =  0.43421
  [80  ,    1]  =  0.52239
  [105 ,    1]  =  0.14799
  [113 ,    1]  =  0.864498
  [120 ,    1]  =  0.101833
  [123 ,    1]  =  0.443123
  [131 ,    1]  =  0.024508
  [138 ,    1]  =  0.305495
  [139 ,    1]  =  0.858845
  [144 ,    1]  =  0.496123
  [152 ,    1]  =  0.961757
  ⋮
  [9822, 3000]  =  0.602639
  [9848, 3000]  =  0.243586
  [9852, 3000]  =  0.881346
  [9864, 3000]  =  0.470442
  [9873, 3000]  =  0.341248
  [9886, 3000]  =  0.663121
  [9890, 3000]  =  0.834981
  [9906, 3000]  =  0.845998
  [9932, 3000]  =  0.482168
  [9938, 3000]  =  0.752005
  [9944, 3000]  =  0.287297
  [9964, 3000]  =  0.947246
  [9989, 3000]  =  0.233218

In [60]:
# No vectors, this takes about 5 sec.
k=100
@time σ₁,rest=svds(A,nsv=k,ritzvec=false)


  6.563805 seconds (1.43 M allocations: 76.054 MiB, 1.38% gc time)
Out[60]:
(SVD{Float64,Float64,Array{Float64,2}}(Array{Float64}(undef,3000,0), [137.82545363951627, 19.659555486091527, 19.565213616705194, 19.552536292496963, 19.525049221357257, 19.509740222311038, 19.49319173802272, 19.485164996336295, 19.475479809062556, 19.45299990484318  …  18.551484534609237, 18.546353671827013, 18.541421809634283, 18.523386678090063, 18.520465841997485, 18.5111918234943, 18.501172756557644, 18.49694636781659, 18.493792184262407, 18.480182270274355], Array{Float64}(undef,0,10000)), 100, 10, 782, [0.12964638047275612, 0.6504774020435526, -0.018688087561580594, -0.5696989630264966, -0.03538359951351974, -2.61153853488529, -0.38859510115457574, -0.7066933659212471, 1.0356511210504786, 1.5107207358362402  …  -1.6524165278905782, -1.508442943223999, -0.6223001004782408, -2.524280742211434, 0.11415165590634546, 1.331551765542227, -1.0537782455648523, -0.5203888705695237, -1.308278016462071, -0.16423274623187528])

In [61]:
# Full matrix
@time σ₂=svdvals(Matrix(A));


 20.460368 seconds (34.80 k allocations: 461.289 MiB, 0.10% gc time)

In [62]:
maximum(abs,(σ₁.S-σ₂[1:k])./σ₂[1:k])


Out[62]:
5.416498806286508e-15
